A Parametric UMAP's sampling and effective loss function

In Parametric UMAP [

The loss is computed for this mini-batch, and the parameters of the neural network are then updated via stochastic gradient descent. Two aspects differ from UMAP: first, since automatic differentiation is used, not only is the head of a negative-sample edge repelled from the tail, but both repel each other; second, the same number of edges is sampled in each epoch. This leads to a different repulsive weight for Parametric UMAP, as described in Theorem A.1. Parametric UMAP's negative sampling is uniform from a batch that is itself sampled

Since UMAP's implementation considers a point its own first nearest neighbor, but the

C Computing the expected gradient of UMAP's optimization procedure

In this appendix, we show that the expected update in UMAP's optimization scheme does not

It is continuously differentiable unless two embedding points coincide.
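The per-batch loss described above can be sketched as follows. This is a minimal numpy illustration, not Parametric UMAP's actual implementation: the low-dimensional similarity `1 / (1 + a * d^(2b))` is the standard UMAP curve, and the parameters `a`, `b` and the edge-sampling scheme are simplifying assumptions.

```python
import numpy as np

def low_dim_similarity(d2, a=1.0, b=1.0):
    """UMAP-style low-dimensional similarity q = 1 / (1 + a * d^(2b))."""
    return 1.0 / (1.0 + a * d2 ** b)

def batch_loss(emb, pos_edges, neg_edges, a=1.0, b=1.0, eps=1e-6):
    """Binary cross-entropy loss over one mini-batch of sampled edges.

    emb       : (n, dim) embedding coordinates
    pos_edges : (p, 2) index pairs of attractive (nearest-neighbor) edges
    neg_edges : (m, 2) index pairs of repulsive (negative-sample) edges
    """
    def sq_dists(edges):
        diff = emb[edges[:, 0]] - emb[edges[:, 1]]
        return np.sum(diff * diff, axis=1)

    q_pos = low_dim_similarity(sq_dists(pos_edges), a, b)
    q_neg = low_dim_similarity(sq_dists(neg_edges), a, b)
    # attraction pulls positive pairs together, repulsion pushes negatives apart
    return -np.log(q_pos + eps).sum() - np.log(1.0 - q_neg + eps).sum()
```

Under automatic differentiation, the gradient of this loss with respect to `emb` moves both endpoints of every edge, positive or negative, which is exactly the first difference from UMAP's hand-coded updates noted above.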
Reviews: Input Similarity from the Neural Network Perspective
All of the reviewers found the proposed technique original and the theory interesting. The reviewers initially had concerns regarding the structure of the paper, the relevance of some of the experiments, and the comparison with perceptual loss. These concerns were alleviated by the author feedback. Assuming that the authors integrate their feedback into the paper and incorporate all of the reviewers' comments, I recommend acceptance as a poster.
Input Similarity from the Neural Network Perspective
Given a trained neural network, we aim at understanding how similar it considers any two samples. For this, we express a proper definition of similarity from the neural network's perspective. We study the mathematical properties of this similarity measure, and show how to estimate sample density with it, in low complexity, enabling new types of statistical analysis for neural networks. We also propose to use it during training, to enforce that examples known to be similar should also be seen as similar by the network. We then study the self-denoising phenomenon encountered in regression tasks when training neural networks on datasets with noisy labels.
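One natural way to instantiate such a network-defined similarity (an assumption here, in the spirit of the paper: the cosine between the parameter gradients of the network's output at the two inputs) can be sketched with numeric gradients on a toy model:

```python
import numpy as np

def numeric_param_grad(f, theta, x, eps=1e-6):
    """Finite-difference gradient of f(theta, x) with respect to theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step.flat[i] = eps
        grad.flat[i] = (f(theta + step, x) - f(theta - step, x)) / (2 * eps)
    return grad

def input_similarity(f, theta, x1, x2):
    """Cosine similarity between d f(x1)/d theta and d f(x2)/d theta."""
    g1 = numeric_param_grad(f, theta, x1)
    g2 = numeric_param_grad(f, theta, x2)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# toy one-hidden-unit network: f(x) = w2 * tanh(w1 * x), theta = (w1, w2)
def net(theta, x):
    return theta[1] * np.tanh(theta[0] * x)
```

Any input is maximally similar to itself under this measure, and summing similarities of a sample against the rest of a dataset gives a cheap density-style statistic of the kind the abstract alludes to.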
Unnatural language processing: How do language models handle machine-generated prompts?
Kervadec, Corentin, Franzon, Francesca, Baroni, Marco
Language model prompt optimization research has shown that semantically and grammatically well-formed manually crafted prompts are routinely outperformed by automatically generated token sequences with no apparent meaning or syntactic structure, including sequences of vectors from a model's embedding space. We use machine-generated prompts to probe how models respond to input that is not composed of natural language expressions. We study the behavior of models of different sizes in multiple semantic tasks in response to both continuous and discrete machine-generated prompts, and compare it to the behavior in response to human-generated natural-language prompts. Even when producing a similar output, machine-generated and human prompts trigger different response patterns through the network processing pathways, including different perplexities, different attention and output entropy distributions, and different unit activation profiles. We provide preliminary insight into the nature of the units activated by different prompt types, suggesting that only natural language prompts recruit a genuinely linguistic circuit.
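One of the quantities compared above, the entropy of the model's output distribution at each position, is straightforward to compute given a matrix of logits. The sketch below is illustrative and model-agnostic; the logit values in the test are made up, not taken from any real model:

```python
import numpy as np

def output_entropies(logits):
    """Per-position entropy (in nats) of the softmax over a
    (seq_len, vocab_size) matrix of logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize the softmax
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)
```

A maximally uncertain position (uniform logits over a vocabulary of size V) attains entropy ln V, while a confidently predicted token drives it toward 0; comparing such entropy profiles between natural-language and machine-generated prompts is one of the diagnostics described above.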
On UMAP's true loss function
Damrich, Sebastian, Hamprecht, Fred A.
UMAP has supplanted t-SNE as state-of-the-art for visualizing high-dimensional datasets in many disciplines, but the reason for its success is not well understood. In this work, we investigate UMAP's sampling-based optimization scheme in detail. We derive UMAP's effective loss function in closed form and find that it differs from the published one. As a consequence, we show that UMAP does not aim to reproduce its theoretically motivated high-dimensional UMAP similarities. Instead, it tries to reproduce similarities that only encode the shared $k$-nearest-neighbor graph, thereby challenging the previous understanding of UMAP's effectiveness. Rather, we claim that the key to UMAP's success is its implicit balancing of attraction and repulsion resulting from negative sampling, which in turn facilitates optimization via gradient descent. We corroborate our theoretical findings on toy and single-cell RNA sequencing data.
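The sampling-based optimization analyzed here can be sketched as one epoch of stochastic updates. This is a deliberate simplification, not the reference implementation: edges are passed in directly rather than sampled proportionally to their weights, and the similarity parameters are fixed at a = b = 1. It does preserve the asymmetry the analysis turns on: positive edges move both endpoints, while each of the negatives drawn by negative sampling moves only the head.

```python
import numpy as np

def umap_sgd_epoch(emb, edges, rng, n_neg=5, lr=0.05):
    """One epoch of UMAP-style sampled updates with a = b = 1.

    For every positive edge (i, j) both endpoints are pulled together;
    for each of n_neg uniformly drawn negatives only the head i is
    pushed away -- this negative sampling implicitly balances
    attraction and repulsion.
    """
    n = emb.shape[0]
    for i, j in edges:
        d = emb[i] - emb[j]
        d2 = d @ d
        # attractive gradient of log(1 + d2) with respect to emb[i]
        emb[i] -= lr * 2.0 * d / (1.0 + d2)
        emb[j] += lr * 2.0 * d / (1.0 + d2)
        for _ in range(n_neg):
            k = rng.integers(n)
            if k == i or k == j:
                continue
            r = emb[i] - emb[k]
            r2 = r @ r
            # repulsive gradient of -log(r2 / (1 + r2)), head only,
            # with a small epsilon guarding against coinciding points
            emb[i] += lr * 2.0 * r / ((r2 + 1e-3) * (1.0 + r2))
    return emb
```

Because repulsion is only ever applied to the `n_neg` sampled negatives rather than to all non-neighbor pairs, the effective repulsive weight per pair is much smaller than in the published loss, which is the balancing effect the abstract refers to.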